
    A Domain-Specific Language and Editor for Parallel Particle Methods

    Domain-specific languages (DSLs) are of increasing importance in scientific high-performance computing to reduce development costs, raise the level of abstraction and, thus, ease scientific programming. However, designing and implementing DSLs is not an easy task, as it requires knowledge of the application domain and experience in language engineering and compilers. Consequently, many DSLs follow a weak approach using macros or text generators, which lack many of the features that make a DSL comfortable for programmers. Some of these features---e.g., syntax highlighting, type inference, error reporting, and code completion---are easily provided by language workbenches, which combine language engineering techniques and tools in a common ecosystem. In this paper, we present the Parallel Particle-Mesh Environment (PPME), a DSL and development environment for numerical simulations based on particle methods and hybrid particle-mesh methods. PPME uses the Meta Programming System (MPS), a projectional language workbench. PPME is the successor of the Parallel Particle-Mesh Language (PPML), a Fortran-based DSL that used conventional implementation strategies. We analyze and compare both languages and demonstrate how the programmer's experience can be improved using static analyses and projectional editing. Furthermore, we present an explicit domain model for particle abstractions and the first formal type system for particle methods.
    Comment: Submitted to ACM Transactions on Mathematical Software on Dec. 25, 201

    A new class of highly efficient exact stochastic simulation algorithms for chemical reaction networks

    We introduce an alternative formulation of the exact stochastic simulation algorithm (SSA) for sampling trajectories of the chemical master equation for a well-stirred system of coupled chemical reactions. Our formulation is based on factored-out, partial reaction propensities. This novel exact SSA, called the partial propensity direct method (PDM), is highly efficient and has a computational cost that scales at most linearly with the number of chemical species, irrespective of the degree of coupling of the reaction network. In addition, we propose a sorting variant, SPDM, which is especially efficient for multiscale reaction networks.
    Comment: 23 pages, 3 figures, 4 tables; accepted by J. Chem. Phys.
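    For context, the sketch below shows the classical direct-method SSA that PDM reformulates. It is the textbook Gillespie algorithm, not the partial-propensity variant, and the example reaction, rate constant, and initial copy numbers are illustrative only.

```python
import numpy as np

def ssa_direct(x0, stoich, rates, propensity, t_end, rng=None):
    """Classical Gillespie direct-method SSA (textbook version, not PDM).

    x0         -- initial copy numbers, shape (n_species,)
    stoich     -- stoichiometry matrix, shape (n_reactions, n_species)
    rates      -- rate constants, shape (n_reactions,)
    propensity -- function (x, rates) -> propensities a_j(x)
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    history = [(t, x.copy())]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 <= 0.0:                        # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)       # time to the next reaction
        j = rng.choice(len(a), p=a / a0)     # which reaction fires
        x += stoich[j]
        history.append((t, x.copy()))
    return history

# Illustrative example: A + B -> C with mass-action propensity a = k * A * B
stoich = np.array([[-1, -1, 1]])
prop = lambda x, k: np.array([k[0] * x[0] * x[1]])
traj = ssa_direct([100, 80, 0], stoich, [0.01], propensity=prop, t_end=1.0)
```

    The cost of this formulation grows with the number of reactions per step; the partial-propensity factorization described in the abstract is what reduces the scaling to the number of species.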

    A robustness measure for singular point and index estimation in discretized orientation and vector fields

    The identification of singular points or topological defects in discretized vector fields occurs in diverse areas ranging from the polarization of the cosmic microwave background to liquid crystals to fingerprint recognition and bio-medical imaging. Due to their discrete nature, defects and their topological charge cannot depend continuously on each single vector, but they change discontinuously as soon as a vector changes by more than a threshold. Considering this threshold of admissible change at the level of vectors, we develop a robustness measure for discrete defect estimators. Here, we compare different template paths for defect estimation in discretized vector or orientation fields. Sampling prototypical vector field patterns around defects shows that the robustness increases with the length of the template path, but less so in the presence of noise on the vectors. We therefore find an optimal trade-off between resolution and robustness against noise for relatively small templates, except for the "single pixel" defect analysis, which cannot exclude zero robustness. The presented robustness measure paves the way for uncertainty quantification of defects in discretized vector fields.
    Comment: 4 pages, 1 figure
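    As a minimal illustration of index estimation along a closed template path, the sketch below sums wrapped angle differences around a loop of grid pixels, assuming the field is given as a 2D array of vector orientations. The specific template paths and the robustness measure developed in the paper are not reproduced here.

```python
import numpy as np

def defect_index(angles, path):
    """Estimate the topological index (winding number) of a 2D vector field.

    angles -- 2D array of vector orientations theta(row, col) in radians
    path   -- ordered closed loop of (row, col) pixels around the candidate defect
    Returns the total wrapped angle change along the loop divided by 2*pi.
    """
    total = 0.0
    for (r0, c0), (r1, c1) in zip(path, path[1:] + path[:1]):
        d = angles[r1, c1] - angles[r0, c0]
        # wrap each increment to (-pi, pi] so a single noisy vector
        # cannot contribute more than half a turn
        total += (d + np.pi) % (2.0 * np.pi) - np.pi
    return total / (2.0 * np.pi)

# Illustrative +1 defect: theta = atan2(y, x) on a 5x5 patch, 3x3 template loop
ys, xs = np.mgrid[-2:3, -2:3]
theta = np.arctan2(ys, xs)
loop = [(1, 1), (1, 2), (1, 3), (2, 3), (3, 3), (3, 2), (3, 1), (2, 1)]
print(defect_index(theta, loop))   # ~ +1.0
```

    Longer loops average over more vectors, which is the resolution-versus-robustness trade-off the abstract refers to; for orientation (headless) fields the wrapping interval would be (-pi/2, pi/2] instead.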

    A Method for Modeling Growth of Organs and Transplants Based on the General Growth Law: Application to the Liver in Dogs and Humans

    Understanding biological phenomena requires a systemic approach that incorporates different mechanisms acting on different spatial and temporal scales, since in organisms the workings of all components, such as organelles, cells, and organs, interrelate. This inherent interdependency between diverse biological mechanisms, both on the same and on different scales, enables an organism to maintain homeostasis and physiological stability through numerous feedback loops. Thus, developing models of organisms and their constituents should be done within the overall systemic context of the studied phenomena. We introduce such a method for modeling growth and regeneration of livers at the organ scale, considering it a part of the overall multi-scale biochemical and biophysical processes of an organism. Our method is based on the earlier discovered general growth law, postulating that any biological growth process comprises a uniquely defined distribution of nutritional resources between maintenance needs and biomass production. Based on this law, we introduce a liver growth model that accurately predicts the growth of liver transplants in dogs and liver grafts in humans. Using this model, we find quantitative growth characteristics, such as the time point when the transition period after surgery is over and the liver resumes normal growth, the rates at which hepatocytes are involved in proliferation, etc. We then use the model to determine and quantify otherwise unobservable metabolic properties of livers.
    Comment: 13 pages, 6 figures
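    Purely as an illustration of the resource-partition idea (incoming resources split between maintenance of existing mass and production of new biomass), a generic growth ODE of the von Bertalanffy type might look like the sketch below. The equation, exponents, and constants are placeholders, not the authors' general growth law or their fitted liver parameters.

```python
import numpy as np

def grow(m0, t_end, dt=0.01, a=1.0, b=0.12, alpha=0.75):
    """Toy resource-partition growth model (illustrative only).

    dm/dt = a * m**alpha - b * m
    The first term stands for resources converted into new biomass
    (supply scaling sublinearly with mass m), the second for
    maintenance costs proportional to the existing mass.
    """
    ts = np.arange(0.0, t_end, dt)
    ms = np.empty_like(ts)
    m = m0
    for i in range(len(ts)):
        ms[i] = m
        m += dt * (a * m**alpha - b * m)   # explicit Euler step
    return ts, ms

ts, ms = grow(m0=50.0, t_end=200.0)
print(ms[-1])   # approaches the fixed point (a/b)**(1/(1-alpha)) ~ 4823 for these toy values
```

    In such a partition model, growth stops where the maintenance term balances the supply term, which is the kind of quantitative transition point the abstract refers to for post-surgery liver regrowth.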

    PPF - A Parallel Particle Filtering Library

    We present the parallel particle filtering (PPF) software library, which enables hybrid shared-memory/distributed-memory parallelization of particle filtering (PF) algorithms, combining the Message Passing Interface (MPI) with multithreading for multi-level parallelism. The library is implemented in Java and relies on Open MPI's Java bindings for inter-process communication. It includes dynamic load balancing, multi-thread balancing, and several algorithmic improvements for PF, such as input-space domain decomposition. The PPF library hides the difficulties of efficient parallel programming of PF algorithms and provides application developers with the necessary tools for parallel implementation of PF methods. We demonstrate the capabilities of the PPF library using two distributed PF algorithms in two scenarios with different numbers of particles. The PPF library runs a 38 million particle problem, corresponding to more than 1.86 GB of particle data, on 192 cores with 67% parallel efficiency. To the best of our knowledge, the PPF library is the first open-source software that offers a parallel framework for PF applications.
    Comment: 8 pages, 8 figures; will appear in the proceedings of the IET Data Fusion & Target Tracking Conference 201
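    For readers unfamiliar with particle filtering, a minimal serial bootstrap particle filter is sketched below (in Python for brevity; the PPF library itself is Java with MPI, and its load balancing and domain decomposition are not shown). The random-walk state model and noise levels are illustrative.

```python
import numpy as np

def bootstrap_pf(observations, n_particles, transition, likelihood, init, rng=None):
    """Minimal serial bootstrap particle filter (sequential importance
    resampling). PPF parallelizes steps like these across MPI ranks and
    threads; none of that machinery is reproduced here.
    """
    rng = rng or np.random.default_rng()
    particles = init(n_particles, rng)             # sample from the prior
    estimates = []
    for y in observations:
        particles = transition(particles, rng)     # propagate through the dynamics
        w = likelihood(y, particles)               # weight by the observation
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]                 # multinomial resampling
        estimates.append(particles.mean(axis=0))   # posterior-mean estimate
    return np.array(estimates)

# Illustrative 1D random-walk state observed with Gaussian noise
truth = np.cumsum(np.random.default_rng(0).normal(0.0, 0.1, 50))
obs = truth + np.random.default_rng(1).normal(0.0, 0.5, 50)
est = bootstrap_pf(
    obs, 1000,
    transition=lambda p, rng: p + rng.normal(0.0, 0.1, p.shape),
    likelihood=lambda y, p: np.exp(-0.5 * ((y - p[:, 0]) / 0.5) ** 2),
    init=lambda n, rng: rng.normal(0.0, 1.0, (n, 1)),
)
```

    The resampling step is the main obstacle to parallelization, since it couples all particles; distributed PF variants like those demonstrated with PPF restructure or localize this step.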

    A Self-organizing Adaptive-resolution Particle Method with Anisotropic Kernels

    Adaptive-resolution particle methods reduce the computational cost for problems that develop a wide spectrum of length scales in their solution. Concepts from self-organization can be used to determine suitable particle distributions, sizes, and numbers at runtime. If the spatial derivatives of the function strongly depend on the direction, the computational cost and the required number of particles can be further reduced by using anisotropic particles. Anisotropic particles have ellipsoidal influence regions (shapes) that are locally aligned with the direction of smallest variation of the function. We present a framework that allows consistent evaluation of linear differential operators on arbitrary distributions of anisotropic particles. We further extend the concept of particle self-organization to anisotropic particles, where the directions and magnitudes of anisotropy are also self-adapted. We benchmark the accuracy and efficiency of the method in a number of 2D and 3D test cases.
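    As an illustration of anisotropic kernel weighting (not the paper's consistent operator construction), the sketch below estimates a gradient at a particle by fitting a local linear model to its neighbours, weighted by an anisotropic Gaussian whose shape matrix encodes the ellipsoidal influence region. The function, shape matrix, and neighbour distribution are made up for the example.

```python
import numpy as np

def aniso_gradient(xp, fp, x0, f0, shape):
    """Estimate grad f at x0 from neighbour particles (xp, fp) via a
    weighted local linear fit. Weights come from an anisotropic Gaussian
    kernel w = exp(-0.5 * d^T shape^{-1} d), where the shape matrix
    describes the ellipsoidal influence region. Illustrative only; the
    paper's framework builds consistent operators differently.
    """
    d = xp - x0                                     # neighbour offsets, shape (n, dim)
    w = np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, np.linalg.inv(shape), d))
    sw = np.sqrt(w)
    # weighted least squares for the local model f(x) ~ f0 + g . (x - x0)
    g, *_ = np.linalg.lstsq(d * sw[:, None], (fp - f0) * sw, rcond=None)
    return g

# Illustrative test: f(x, y) = 3x - 2y sampled at random neighbours of the origin
rng = np.random.default_rng(0)
xp = rng.normal(0.0, 0.1, (30, 2))
fp = 3.0 * xp[:, 0] - 2.0 * xp[:, 1]
shape = np.array([[0.04, 0.0], [0.0, 0.01]])        # ellipse elongated along x
print(aniso_gradient(xp, fp, np.zeros(2), 0.0, shape))   # ~ [3, -2]
```

    Aligning the long axis of the shape matrix with the direction of smallest variation lets each particle cover more of the smooth direction with fewer neighbours, which is where the cost reduction described in the abstract comes from.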